Cambria Cluster / FTC 5.5.0
Akamai Cloud Kubernetes Help Documentation
Terraform Installation
Document History
Version | Date | Description |
---|---|---|
5.4.0 | 10/03/2024 | Updated for release 5.4.0.21627 (Linux) |
5.5.0 | 04/11/2025 | Updated for release 5.5.0.23529 (Linux) |
Download the online version of this document for the latest information, and always download the latest files.
Do not move forward with the installation process if you do not agree with the End User License Agreement (EULA) for our products.
You can download and read the EULA for Cambria FTC, Cambria Cluster, and Cambria License Manager from the links below:
Limitations and Security Information
Cambria FTC, Cluster, and License Manager are installed in Linux Docker containers. The limitations and security checks performed for this version are covered in our general Linux documents below:
Note: These documents are for informational use only. The setup for Kubernetes starts in section 2. Create Kubernetes Cluster.
Note: This document references Kubernetes version 1.32 only.
⚠️ Important: Before You Begin
PDF documents can introduce copy/paste errors. For best results, download this document and any PDF documents it references and open them in a PDF viewer such as Adobe Acrobat.
For commands that span more than one line, copy each line individually and check that the copied command matches the one in the document.
⚠️ Critical Information: Read Before Proceeding with Installation
Before starting the installation, carefully review the following considerations. Skipping this section may result in errors, failed deployments, or misconfigurations.
Read only the Critical Information: Read Before Proceeding with Installation sections of the following documents:
1. Prerequisites
1.1. X11 Forwarding for User Interface
For Windows and macOS only, some additional tools must be installed in order to use the user interface of Capella's Terraform installer.
If using Linux, the machine must have a graphical user interface.
1.1.1. Option 1: Microsoft Windows Tools
- Download and install an X11 forwarding tool such as VcXsrv:
  Download VcXsrv
- Also download and install PuTTY or a similar SSH tool that supports X11 forwarding:
  Download PuTTY
- Open XLaunch and do the following:
  - Window 1: Choose Multiple windows and set Display number to 0
  - Window 2: Choose Start no client
  - Window 3: Enable all checkboxes: Clipboard, Primary Selection, Native OpenGL, and Disable access control
  - Window 4: Click on Save configuration and save this somewhere to reuse in the future
1.1.2. Option 2: Apple macOS Tools
- Download and install the X11 forwarding tool XQuartz:
  Download XQuartz
2. Prepare Deployment Server
- On the Akamai Dashboard, create a new Ubuntu Linode to be used for the Terraform deployment.
  For best performance, choose a machine with 8 GB of RAM or more, as X11 forwarding uses a significant amount of memory on the Linode instance.
- SSH into the new Linode and install general tools:
  sudo apt update
  sudo apt upgrade
  sudo apt install curl unzip libice6 libsm6 dbus libgtk-3-0
- Download the Terraform package:
  curl -o terraform_LKE_CambriaCluster_1_0.zip -L "https://www.dropbox.com/scl/fi/t8pgwr6otlg4lesgf3xbq/terraform_LKE_CambriaCluster_1_0.zip?rlkey=82rjt9l3at2gfcztt7sy0zsz1&st=pwzmzhpb&dl=1"
- Unzip the package and make the shell scripts executable:
  unzip terraform_LKE_CambriaCluster_1_0.zip
  chmod +x *.sh
  chmod +x ./TerraformVariableEditor
- Install the tools needed for deployment:
  ./setupTools.sh
  ./installLogcli.sh
- Verify the tools are installed:
  kubectl version --client
  helm version
  terraform -v
  logcli --version
- Exit the SSH session.
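The verification step above can also be scripted. The following is a minimal sketch (tool names taken from the steps above) that reports any missing tool in one pass:

```shell
# Sketch: confirm the deployment tools are on PATH before proceeding.
# The tool list matches the installation steps above; versions will vary.
missing=0
for tool in kubectl helm terraform logcli; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "missing: $tool" >&2
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all tools installed"
fi
```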
3. Installation
- SSH into the instance created in section 2. Prepare Deployment Server using one of the following methods (depending on the OS being used as the SSH client):

  Option 1: Windows
  - Open PuTTY or a similar tool and enable X11 forwarding in the configuration. In PuTTY, this setting is under Connection > SSH > X11 > X11 forwarding.
  - SSH into the instance with the created user. Usually, the user is root.

  Option 2: Unix (Linux, macOS)
  - Open a terminal window and SSH into the Linode instance using the -Y option and one of the created users (usually root). Example:
    ssh -Y -i "mysshkey" root@123.123.123.123
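Once connected, X11 forwarding can be sanity-checked before launching the editor UI. This is a generic check, not part of Capella's tooling:

```shell
# When X11 forwarding is active, the SSH server sets DISPLAY for the
# session (typically something like "localhost:10.0").
if [ -n "$DISPLAY" ]; then
  echo "X11 forwarding is active on $DISPLAY"
else
  echo "DISPLAY is not set - X11 forwarding is not working" >&2
fi
```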
- Run the following script to set secrets as environment variables. These are important details such as credentials, license keys, etc. Reference the following table:
source ./setEnvVariablesCambriaCluster.sh
Environment Variable Explanation
Variable | Explanation |
---|---|
Linode API Token | API token from Akamai Cloud's Dashboard. See guide |
PostgreSQL Password | The password for the PostgreSQL database that Cambria Cluster uses. General password rules apply. |
Cambria API Token | A token needed for making calls to the Cambria FTC web server. General token rules apply (e.g., 1234-5678-90abcdefg ). |
Web UI Users | Login credentials for the Cambria WebUIs for the Kubernetes cluster. Format: role,username,password .Allowed roles: 1. admin – Full access and user management 2. superuser – Full access 3. user – View-only Example: admin,admin,changethispassword1234,user,guest,password123 |
Argo Events Webhook Token | Token for specific Argo Events calls. General token rules apply (e.g., 1234-5678-90abcdefg ). |
Cambria FTC License Key | Provided by Capella. Starts with a '2' (e.g., 2AB122-11123A-ABC890-DEF345-ABC321-543A21 ). |
Grafana Password | Password for the Grafana Dashboard. |
Access Key | S3-compatible log storage access key (e.g., AWS_ACCESS_KEY_ID ). |
Secret Key | S3-compatible log storage secret key (e.g., AWS_SECRET_ACCESS_KEY ). |
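Because the script is sourced rather than executed, the values it collects persist as environment variables in the current shell. Conceptually it does something like the following; the variable names below are purely illustrative, not the script's actual names:

```shell
# Illustrative only: the real setEnvVariablesCambriaCluster.sh defines its
# own variable names and typically collects values interactively.
# Sourcing keeps these exports alive in the current session.
export TF_VAR_linode_api_token="your-linode-api-token"
export TF_VAR_postgres_password="a-strong-password"
export TF_VAR_cambria_api_token="1234-5678-90abcdefg"
echo "secrets loaded into the current shell session"
```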
- Run the Terraform editor UI:
  ./TerraformVariableEditor
- Click on Open Terraform File and choose the CambriaCluster_LKE.tf file.
3.1. Terraform UI Editor Configuration
Using the UI, edit the fields accordingly. Reference the following table for values that should be changed:
- Blue: values in blue do not need to change unless the corresponding feature is being enabled or disabled
- Red: values in red are environment-specific and must be changed for your deployment
Variable | Explanation |
---|---|
lke_cluster_name = CambriaFTCCluster | The name of the Kubernetes cluster |
lke_region = us-mia | The region code where the Kubernetes cluster should be deployed |
lke_manager_pool_node_count = 3 | The number of Cambria Cluster nodes to create |
workers_can_use_manager_nodes = false | Whether the Cambria Cluster nodes should also handle encoding tasks |
workersUseGPU = false | Set to true to enable NVENC capabilities |
nbGPUs = 1 | Max number of GPUs to use from encoding machines |
manager_instance_type = g6-dedicated-4 | Instance type of the Cambria Cluster nodes |
ftc_enable_auto_scaler = true | Enable Cambria FTC's autoscaler |
ftc_instance_type = g6-dedicated-8 | Instance type for autoscaled encoders |
max_ftc_instances = 5 | Max number of encoder instances |
cambria_cluster_replicas = 3 | Max number of management and replica nodes |
expose_capella_service_externally = true | Create load balancers to expose Capella |
enable_ingress = true | Create ingress for Capella applications |
host_name = myhost.com | Public domain name |
acme_registration_email = test@example.com | Email for Let's Encrypt registration |
acme_server = https://acme-staging-v02.api.letsencrypt.org/directory | ACME server URL |
enable_eventing = true | Enable Argo eventing features |
expose_grafana = true | Publicly expose Grafana dashboard |
loki_storage_type = s3_embedcred | Storage type for Loki logs |
loki_local_storage_size_gi = 100 | Size of local storage in Gi for Loki |
loki_s3_bucket_name = "" | S3 bucket name for Loki |
loki_s3_region = "" | S3 bucket region for Loki |
loki_replicas = 2 | Number of Loki replicas |
loki_max_unavailable = 3 | Max unavailable Loki pods during upgrades |
loki_log_retention_period = 7 | Days to retain logs |
- Once done, click on Save Changes and close the UI.
- Run the following commands to create a Terraform plan:
terraform init && terraform plan -out lke-plan.tfplan
- Apply the Terraform plan to create the Kubernetes cluster:
terraform apply -auto-approve lke-plan.tfplan
- Set the KUBECONFIG environment variable:
  export KUBECONFIG=kubeconfig.yaml
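With KUBECONFIG exported, a quick connectivity check confirms that Terraform created the cluster and kubectl can reach it. This is a sketch; node names and counts depend on your configuration:

```shell
export KUBECONFIG=kubeconfig.yaml
# Only attempt the call if kubectl is installed and the kubeconfig exists;
# otherwise report what is missing instead of failing.
if command -v kubectl >/dev/null 2>&1 && [ -f "$KUBECONFIG" ]; then
  kubectl get nodes
else
  echo "kubectl or kubeconfig.yaml not available in this shell" >&2
fi
```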
- Save the following files securely for future changes or redeployment:
  - the .tfstate file
  - lke-plan.tfplan
  - CambriaCluster_LKE.tf
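One way to keep those files together is a permission-restricted archive. A minimal sketch follows; the archive name is arbitrary, and `terraform.tfstate` is an assumption here (Terraform's default state file name), so adjust if yours differs:

```shell
# Bundle whichever deployment files exist and restrict read access.
# terraform.tfstate is an assumption (Terraform's default state file name).
to_pack=""
for f in terraform.tfstate lke-plan.tfplan CambriaCluster_LKE.tf; do
  if [ -f "$f" ]; then to_pack="$to_pack $f"; fi
done
if [ -n "$to_pack" ]; then
  tar czf cambria-terraform-backup.tgz $to_pack
  chmod 600 cambria-terraform-backup.tgz
  echo "backed up:$to_pack"
else
  echo "no deployment files found in the current directory" >&2
fi
```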
4. Verification and Testing
Follow the steps in section 5. Verify Cambria FTC / Cluster Installation of the main Cambria Cluster Kubernetes Installation Guide for verification and section 6. Testing Cambria FTC / Cluster for testing with Cambria FTC jobs:
Cambria Cluster and FTC 5.5.0 on Akamai Kubernetes (PDF)
5. Upgrading
5.1. Option 1: Normal Upgrade via Terraform Apply
This upgrade method is best when changing version numbers, secrets such as the license key or WebUI users, and Cambria FTC / Cluster-specific settings such as the maximum number of pods, replicas, etc.
⚠️ Warning – Known Issues:
- pgClusterPassword cannot currently be updated via this method.
- Changing the PostgreSQL version is not supported via this method.
- The region of the cluster cannot be changed.
Steps:
- Follow the steps in section 3: Installation.
- Follow the verification steps in section 4: Verification and Testing to ensure the updates were applied.
5.2. Option 2: Upgrade via Cambria Cluster Reinstallation
This is the most reliable upgrade option. It uninstalls all Cambria FTC and Cluster components and reinstalls them using a new Helm chart and values file. This will delete the database and remove all jobs from the Cambria Cluster UI.
Steps:
- Follow section 4.2: Creating and Editing Helm Configuration File to prepare your new cambriaClusterConfig.yaml.
- Uninstall the current deployment:
  helm uninstall capella-cluster --wait
- Reinstall using the updated values file:
  helm upgrade --install capella-cluster capella-cluster.tgz --values cambriaClusterConfig.yaml
- Wait a few minutes for the Kubernetes pods to install.
- Verify the installation using section 4: Verification and Testing.
6. Cleanup
To clean up the environment, ensure the following steps are followed in order. If using FTC's autoscaler, verify that no leftover Cambria FTC nodes remain running.
Steps:
- Remove the Helm deployments:
  helm uninstall capella-cluster -n default --wait
- If persistent volumes remain, patch them for deletion:
  kubectl get pv -o name | awk -F'/' '{print $2}' | xargs -I{} kubectl patch pv {} -p='{"spec": {"persistentVolumeReclaimPolicy": "Delete"}}'
- Delete all contents of the monitoring namespace (Prometheus, Grafana, Loki, etc.):
  kubectl delete namespace monitoring
- If ingress-nginx was deployed:
  kubectl delete namespace ingress-nginx
- Destroy the Kubernetes cluster:
  terraform destroy -auto-approve
- In the NodeBalancers section of the cloud dashboard, delete any leftover balancers created by the Kubernetes cluster.
- In the Volumes section, delete any remaining volumes created by the Kubernetes cluster.
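For reference, the persistent-volume patch command in the cleanup steps above is a three-stage pipeline. Its text-processing stage can be seen in isolation with sample input (the PV names below are made up):

```shell
# "kubectl get pv -o name" prints one line per volume, e.g.
# "persistentvolume/pvc-abc123". awk splits each line on "/" and keeps the
# bare name, which xargs then feeds to "kubectl patch pv" one at a time.
sample_input="persistentvolume/pvc-abc123
persistentvolume/pvc-def456"
echo "$sample_input" | awk -F'/' '{print $2}'
# prints:
# pvc-abc123
# pvc-def456
```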